In medical image segmentation, it is often necessary to collect opinions from multiple experts to make the final decision. This clinical routine helps to mitigate individual bias. However, when the data is multiply annotated, standard deep learning models are often not applicable. In this paper, we propose a novel neural network framework, called Multi-Rater Prism (MrPrism), to learn medical image segmentation from multiple labels. Inspired by iterative half-quadratic optimization, MrPrism combines the multi-rater confidence assignment task and the calibrated segmentation task in a recurrent manner. In this recurrent process, MrPrism learns the inter-observer variability while taking the semantic properties of the image into account, and finally converges to a self-calibrated segmentation result that reflects the inter-observer agreement. Specifically, we propose the Converging Prism (ConP) and the Diverging Prism (DivP) to process the two tasks iteratively. ConP learns the calibrated segmentation based on the multi-rater confidence maps estimated by DivP, while DivP generates the multi-rater confidence maps based on the segmentation masks estimated by ConP. The experimental results show that by recurrently running ConP and DivP, the two tasks achieve mutual improvement. The final converged segmentation result of MrPrism outperforms state-of-the-art (SOTA) strategies on a wide range of medical image segmentation tasks.
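The ConP/DivP alternation can be illustrated with a minimal NumPy sketch. All function names and the simple agreement-based fusion rule below are illustrative assumptions, not the paper's actual networks: a DivP-like step estimates per-rater confidences from agreement with the current calibrated segmentation, and a ConP-like step fuses the rater masks weighted by those confidences.

```python
import numpy as np

def divp_step(rater_masks, seg):
    # Estimate a per-rater confidence from agreement with the
    # current calibrated segmentation (a toy stand-in for DivP).
    agreements = np.array([1.0 - np.abs(m - seg).mean() for m in rater_masks])
    return agreements / agreements.sum()

def conp_step(rater_masks, conf):
    # Fuse rater masks weighted by confidence (a toy stand-in for ConP).
    return np.tensordot(conf, rater_masks, axes=1)

rng = np.random.default_rng(0)
truth = (rng.random((32, 32)) > 0.5).astype(float)
# Three raters with different noise levels, i.e. different "expertness".
rater_masks = np.stack([
    np.clip(truth + rng.normal(0, s, truth.shape), 0, 1)
    for s in (0.1, 0.3, 0.6)
])

seg = rater_masks.mean(0)            # start from the naive average
for _ in range(5):                   # recurrent ConP/DivP loop
    conf = divp_step(rater_masks, seg)
    seg = conp_step(rater_masks, conf)

print(conf)  # the least noisy rater receives the highest confidence
```

Running the loop shifts the fused segmentation toward the more reliable raters, which is the mutual-improvement effect the recurrence is designed for.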
Different from general visual classification, some classification tasks are more challenging because they require professional-level categorization of the images. In this paper, we call them expert-level classification. Previous fine-grained visual classification (FGVC) methods have made many efforts on some of its specific sub-tasks, but they are difficult to extend to the general case, which relies on a comprehensive analysis of part-global correlations and hierarchical feature interactions. In this paper, we propose the Expert Network (ExpNet) to address the unique challenges of expert-level classification through a unified network. In ExpNet, we hierarchically decouple the part and context features and process them individually using a novel attentive mechanism called Gaze-Shift. In each stage, Gaze-Shift produces a focal-part feature for the subsequent abstraction and memorizes a context-related embedding. We then fuse the final focal embedding with all memorized context-related embeddings to make the prediction. Such an architecture realizes dual-track processing of partial and global information along with hierarchical feature interactions. We conduct experiments on three representative expert-level classification tasks: FGVC, disease classification, and artwork attribute classification. In these experiments, ExpNet achieves superior performance compared to the state of the art across a wide range of fields, indicating its effectiveness and generalization. The code will be made publicly available.
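The dual-track idea can be sketched in a few lines of NumPy. Everything here is invented for illustration: a saliency-based top-k selection stands in for the attentive Gaze-Shift, the focal tokens flow to the next stage, and the discarded tokens are summarized as a memorized context embedding that is fused at the end.

```python
import numpy as np

def gaze_shift(feats, k):
    # Pick the k most salient tokens as the focal part (a toy stand-in
    # for the attentive Gaze-Shift mechanism) and summarize the rest
    # as a context-related embedding.
    saliency = np.linalg.norm(feats, axis=1)
    order = np.argsort(-saliency)
    focal, context = feats[order[:k]], feats[order[k:]]
    return focal, context.mean(axis=0)

rng = np.random.default_rng(1)
feats = rng.normal(size=(64, 16))    # 64 tokens, 16-dim features

contexts = []
for k in (32, 16, 8):                # hierarchical stages
    feats, ctx = gaze_shift(feats, k)
    contexts.append(ctx)

# Fuse the final focal embedding with all memorized context embeddings.
embedding = np.concatenate([feats.mean(axis=0)] + contexts)
print(embedding.shape)
```

The focal track is refined stage by stage while the context track is only read back at prediction time, mirroring the "memorize then fuse" structure described above.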
The diffusion probabilistic model (DPM) has recently become one of the hottest topics in computer vision. Its image generation applications, such as Imagen, Latent Diffusion Models, and Stable Diffusion, have shown impressive generation capabilities and sparked extensive discussion in the community. Many recent studies have also found it useful for many other vision tasks, such as image deblurring, super-resolution, and anomaly detection. Inspired by the success of DPM, we propose the first DPM-based model for general medical image segmentation tasks, which we name MedSegDiff. In order to enhance the step-wise regional attention of DPM for medical image segmentation, we propose dynamic conditional encoding, which establishes state-adaptive conditions for each sampling step. We further propose the Feature Frequency Parser (FF-Parser) to eliminate the negative effect of high-frequency noise components in this process. We verify MedSegDiff on three medical segmentation tasks with different image modalities: optic cup segmentation on fundus images, brain tumor segmentation on MRI images, and thyroid nodule segmentation on ultrasound images. The experimental results show that MedSegDiff outperforms state-of-the-art (SOTA) methods by a considerable performance gap, indicating the generalization and effectiveness of the proposed model.
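The FF-Parser idea of attenuating high-frequency components can be sketched as a Fourier-domain mask. The fixed circular low-pass mask below is a hand-crafted stand-in for the learned parameterized attention map the paper describes:

```python
import numpy as np

def ff_parser(feature, keep_ratio=0.25):
    # Transform to the Fourier domain, attenuate high frequencies with
    # a mask, and transform back (a stand-in for FF-Parser's learned map).
    f = np.fft.fftshift(np.fft.fft2(feature))
    h, w = feature.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = (dist <= keep_ratio * min(h, w) / 2).astype(float)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 64)
smooth = np.sin(x)[None, :] * np.sin(x)[:, None]   # low-frequency signal
noisy = smooth + 0.5 * rng.normal(size=(64, 64))   # high-frequency noise

denoised = ff_parser(noisy)
err_before = np.abs(noisy - smooth).mean()
err_after = np.abs(denoised - smooth).mean()
print(err_before, err_after)
```

The low-frequency structure survives the mask while most of the broadband noise is removed, which is the effect the FF-Parser exploits inside the conditional encoder.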
Clinically, accurate annotation of lesions/tissues can significantly facilitate disease diagnosis. For example, segmentation of the optic disc/cup (OD/OC) on fundus images aids glaucoma diagnosis, segmentation of skin lesions on dermoscopic images aids melanoma diagnosis, and so on. With the development of deep learning techniques, a wide range of methods have demonstrated that lesion/tissue segmentation can also facilitate automated disease diagnosis models. However, existing methods are limited in that they can only capture static regional correlations in the image. Inspired by the global and dynamic nature of Vision Transformers, in this paper we propose the Segmentation-Assisted Diagnosis Transformer (SeaTrans) to transfer segmentation knowledge into a disease diagnosis network. Specifically, we first propose an asymmetric multi-scale interaction strategy to correlate each single low-level diagnosis feature with multi-scale segmentation features. Then, an effective strategy called SeaBlock is adopted to vitalize the diagnosis features with the correlated segmentation features. To model the segmentation-diagnosis interaction, SeaBlock first embeds the diagnosis feature through an encoder conditioned on the segmentation information, and then projects the embedding back into the diagnosis feature space through a decoder. Experimental results show that SeaTrans surpasses a wide range of state-of-the-art (SOTA) segmentation-assisted diagnosis methods on several disease diagnosis tasks.
Segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task for glaucoma diagnosis. In clinical practice, it is often necessary to collect opinions from multiple experts to obtain the final OD/OC annotation. This clinical routine helps to mitigate individual bias. However, when the data is multiply annotated, standard deep learning models are not applicable. In this paper, we propose a novel neural network framework to learn OD/OC segmentation from multi-rater annotations. The segmentation results are self-calibrated by iteratively optimizing the estimation of multi-rater expertness and the calibrated OD/OC segmentation. In this way, the proposed method can achieve mutual improvement of both tasks and finally obtain a refined segmentation result. Specifically, we propose a Diverging Model (DivM) and a Converging Model (ConM) to handle the two tasks respectively. ConM segments the raw image based on the multi-rater expertness maps provided by DivM, while DivM generates the multi-rater expertness maps from the segmentation masks provided by ConM. Experimental results show that by recurrently running ConM and DivM, the results can be self-calibrated so as to outperform a range of state-of-the-art (SOTA) multi-rater segmentation methods.
Pre-training is essential for the performance of deep learning models, especially in medical image analysis tasks with limited training data. However, existing pre-training methods are inflexible: the pre-trained weights of one model cannot be reused by other network architectures. In this paper, we propose an architecture-agnostic initializer that can well initialize any given network architecture after being pre-trained only once. The proposed initializer is a hypernetwork that takes the downstream architecture as an input graph and outputs the initialization parameters for that architecture. We demonstrate the effectiveness and efficiency of the proposed initializer through extensive experimental results on multiple medical imaging modalities, especially in data-limited domains. Moreover, we show that the proposed initializer can be reused as a favorable plug-and-play initializer for any downstream architecture and task (both classification and segmentation) of the same modality.
With the development of deep learning techniques, more and more methods have been proposed to segment the optic disc and cup (OD/OC) from fundus images. Clinically, OD/OC segmentation is usually annotated by multiple clinical experts to mitigate individual bias. However, it is hard to train automated deep learning models on multiple labels. A common practice to address this issue is majority vote, e.g., taking the average of the multiple labels. However, such a strategy ignores the varying expertness of the medical experts. Motivated by the observation that OD/OC segmentation is commonly used for glaucoma diagnosis in clinical practice, in this paper we propose a novel strategy to fuse multi-rater OD/OC segmentation labels via glaucoma diagnosis performance. Specifically, we evaluate the expertness of each rater through an attentive glaucoma diagnosis network. For each rater, its contribution to the diagnosis is reflected as an expertness map. To ensure that the expertness maps generalize to different glaucoma diagnosis models, we further propose an Expertness Generator (ExpG) to eliminate high-frequency components in the optimization process. Based on the obtained expertness maps, the multi-rater labels can be fused into a single ground truth, which we call the diagnosis-first ground truth (DiagFirstGT). Experimental results show that by using DiagFirstGT as the ground truth, OD/OC segmentation networks predict masks with superior diagnosis performance.
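Once the expertness maps are available, the fusion step reduces to a per-pixel weighted average of the rater masks, normalized pixel-wise. A minimal sketch with made-up maps (in the paper the maps come from the attentive diagnosis network after ExpG smoothing):

```python
import numpy as np

def fuse_labels(rater_masks, expertness_maps):
    # Per-pixel weighted fusion of multi-rater masks, where each
    # rater's weight comes from its (diagnosis-derived) expertness map.
    w = expertness_maps / expertness_maps.sum(axis=0, keepdims=True)
    return (w * rater_masks).sum(axis=0)

rng = np.random.default_rng(3)
rater_masks = (rng.random((3, 16, 16)) > 0.5).astype(float)
# Hypothetical expertness maps; positive weights, one per rater and pixel.
expertness_maps = rng.random((3, 16, 16)) + 1e-6

gt = fuse_labels(rater_masks, expertness_maps)
print(gt.min(), gt.max())
```

Unlike majority vote, pixels where a highly expert rater disagrees with the crowd follow that rater's opinion, which is what allows the fused ground truth to favor diagnosis performance.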
The combination of transformers and the masked image modeling (MIM) pre-training framework has shown great potential in various vision tasks. However, the pre-training computational budget is too heavy and prevents MIM from becoming a practical training paradigm. This paper presents FastMIM, a simple and generic framework for expediting masked image modeling with the following two steps: (i) pre-training vision backbones with low-resolution input images; and (ii) reconstructing Histograms of Oriented Gradients (HOG) features instead of the original RGB values of the input images. In addition, we propose FastMIM-P, which progressively enlarges the input resolution during the pre-training stage to further enhance the transfer results of high-capacity models. We point out that: (i) a wide range of input resolutions in the pre-training phase can lead to similar performance in the fine-tuning phase and downstream tasks such as detection and segmentation; (ii) the shallow layers of the encoder are more important during pre-training, and discarding the last several layers can speed up the training stage with no harm to fine-tuning performance; (iii) the decoder should match the size of the selected network; and (iv) HOG is more stable than RGB values when the resolution changes. Equipped with FastMIM, all kinds of vision backbones can be pre-trained in an efficient way. For example, we achieve 83.8%/84.1% top-1 accuracy on ImageNet-1K with ViT-B/Swin-B as backbones. Compared to previous relevant approaches, we achieve comparable or better top-1 accuracy while accelerating the training procedure by $\sim$5$\times$. Code can be found at https://github.com/ggjy/FastMIM.pytorch.
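Reconstructing HOG features instead of RGB means the MIM target for each cell is a histogram of gradient orientations. A minimal per-cell HOG target, assuming simple finite-difference gradients, unsigned orientation bins, and no block normalization (all simplifications relative to standard HOG):

```python
import numpy as np

def hog_target(img, cell=8, bins=9):
    # Per-cell histogram of oriented gradients: the reconstruction
    # target FastMIM uses instead of raw RGB values (simplified:
    # finite differences, unsigned orientations, no block norm).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned, [0, pi)
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)

    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            np.add.at(hist[i, j], bin_idx[sl].ravel(), mag[sl].ravel())
    return hist

img = np.tile(np.arange(32, dtype=float), (32, 1))  # horizontal ramp
hog = hog_target(img)
print(hog.shape)
```

Because the histogram pools gradients over a cell, the target is far less sensitive to the exact pixel grid than RGB reconstruction, which is consistent with observation (iv) that HOG is more stable under resolution changes.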
Network architecture plays a key role in deep learning-based computer vision systems. The widely used convolutional neural networks and Transformers treat the image as a grid or sequence structure, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches that are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on the graph representation of the image, we build the ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: a Grapher module with graph convolution for aggregating and updating graph information, and an FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built with different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code is available at https://github.com/huawei-noah/Efficient-AI-Backbones and the MindSpore code is available at https://gitee.com/mindspore/models.
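The core recipe is patches-as-nodes plus a k-NN graph in feature space. A toy NumPy sketch, with a random linear patch embedding and plain mean aggregation standing in for ViG's learned embedding and max-relative graph convolution:

```python
import numpy as np

def knn_graph(nodes, k):
    # Connect each patch node to its k nearest neighbors in feature space.
    d = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def graph_conv(nodes, nbrs):
    # Aggregate neighbor features (mean used here for simplicity;
    # ViG's Grapher module uses max-relative graph convolution).
    return np.concatenate([nodes, nodes[nbrs].mean(axis=1)], axis=1)

rng = np.random.default_rng(4)
img = rng.random((224, 224, 3))
p = 16                                       # 16x16 patches -> 196 nodes
patches = img.reshape(14, p, 14, p, 3).transpose(0, 2, 1, 3, 4)
embed = rng.normal(size=(p * p * 3, 8))      # toy patch embedding
nodes = patches.reshape(196, -1) @ embed

nbrs = knn_graph(nodes, k=9)
out = graph_conv(nodes, nbrs)
print(out.shape)
```

Because the neighbors are found in feature space rather than on the pixel grid, distant patches of the same object can be linked directly, which is the flexibility grids and sequences lack.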
Transformer networks have achieved great progress on computer vision tasks. The Transformer-in-Transformer (TNT) architecture utilizes an inner transformer and an outer transformer to extract both local and global representations. In this work, we present new TNT baselines by introducing two advanced designs: 1) a pyramid architecture and 2) a convolutional stem. The new "PyramidTNT" significantly improves the original TNT by establishing hierarchical representations. PyramidTNT achieves better performance than previous state-of-the-art vision transformers such as Swin Transformer. We hope this new baseline will be helpful for further research and application of vision transformers. Code will be available at https://github.com/huawei-noah/cv-backbones/tree/master/tnt_pytorch.